
    Real-time multimodal image registration with partial intraoperative point-set data

    We present Free Point Transformer (FPT), a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested, with substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
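
    As an illustration of the two-module design described above, the following PyTorch sketch pairs a PointNet-style global feature extractor with a per-point displacement MLP. Layer widths and the exact architecture are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class FreePointTransformerSketch(nn.Module):
    """Illustrative two-module design: an order-invariant global feature
    extractor over unordered point-sets, and a per-point displacement MLP."""

    def __init__(self, feat_dim=1024):
        super().__init__()
        # Shared per-point MLP followed by max-pooling yields a global
        # feature that is invariant to point order and count.
        self.encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )
        # Point transformer: (source point, joint global feature) -> displacement.
        self.transformer = nn.Sequential(
            nn.Linear(3 + 2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def global_feature(self, pts):                    # pts: (B, N, 3), N may vary
        return self.encoder(pts).max(dim=1).values    # (B, feat_dim)

    def forward(self, source, target):
        f = torch.cat([self.global_feature(source),
                       self.global_feature(target)], dim=-1)   # (B, 2*feat_dim)
        f = f.unsqueeze(1).expand(-1, source.shape[1], -1)
        disp = self.transformer(torch.cat([source, f], dim=-1))
        return source + disp                          # transformed source points
```

    Inference is a single forward pass, which is what makes the real-time use described above feasible.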

    Multimodality Biomedical Image Registration Using Free Point Transformer Networks

    We describe a point-set registration algorithm based on a novel free point transformer (FPT) network, designed for points extracted from multimodal biomedical images for registration tasks, such as those frequently encountered in ultrasound-guided interventional procedures. FPT is constructed with a global feature extractor which accepts unordered source and target point-sets of variable size. The extracted features are conditioned by a shared multilayer perceptron point transformer module to predict a displacement vector for each source point, transforming it into the target space. The point transformer module assumes no vicinity or smoothness in predicting spatial transformation and, together with the global feature extractor, is trained in a data-driven fashion with an unsupervised loss function. In a multimodal registration task using prostate MR and sparsely acquired ultrasound images, FPT yields comparable or improved results relative to other rigid and non-rigid registration methods. This demonstrates the versatility of FPT to learn registration directly from real, clinical training data and to generalize to a challenging task, such as the interventional application presented.
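
    The unsupervised loss mentioned above can be illustrated with the symmetric Chamfer distance, a common correspondence-free choice for point-set registration; the PyTorch sketch below is an assumed realisation, not necessarily the exact published loss.

```python
import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point-sets a: (B, N, 3) and b: (B, M, 3).
    Each point is matched to its nearest neighbour in the other set."""
    d = torch.cdist(a, b)          # (B, N, M) pairwise Euclidean distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

# Usage sketch: loss = chamfer_distance(model(source, target), target); loss.backward()
```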

    Lung Ultrasound Segmentation and Adaptation Between COVID-19 and Community-Acquired Pneumonia

    Lung ultrasound imaging has been shown to be effective in detecting typical patterns of interstitial pneumonia, as a point-of-care tool for patients with both COVID-19 and other community-acquired pneumonia (CAP). In this work, we focus on the hyperechoic B-line segmentation task. Using deep neural networks, we automatically outline the regions that are indicative of pathology-sensitive artifacts and their associated sonographic patterns. In a real-world, data-scarce scenario, we investigate approaches to utilize both COVID-19 and CAP lung ultrasound data to train the networks, comparing fine-tuning and unsupervised domain adaptation. Segmenting either type of lung condition at inference may support a range of clinical applications during evolving epidemic stages, and also demonstrates value in resource-constrained clinical scenarios. Adapting real clinical data acquired from COVID-19 patients to those from CAP patients significantly improved Dice scores from 0.60 to 0.87 (p < 0.001) and from 0.43 to 0.71 (p < 0.001), on independent COVID-19 and CAP test cases, respectively. It is of practical value that the improvement was demonstrated with only a small amount of data in both the training and adaptation data sets, a common constraint for deploying machine learning models in clinical practice. Interestingly, we also report that the inverse adaptation, from labelled CAP data to unlabelled COVID-19 data, did not demonstrate an improvement when tested on either condition. Furthermore, we offer a possible explanation that correlates the segmentation performance with label consistency and data domain diversity in this point-of-care lung ultrasound application.
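
    For reference, the Dice scores quoted above measure overlap between predicted and reference segmentations; a minimal NumPy sketch of the metric:

```python
import numpy as np

def dice_score(pred, ref, eps=1e-8):
    """Dice overlap between two binary masks of the same shape:
    2 * |pred AND ref| / (|pred| + |ref|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)
```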

    Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

    Endoscopic ultrasound (EUS) is a challenging procedure that requires skill, both in endoscopy and in ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to automatically classify ultrasound images with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, is also difficult to perform retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data, which can be used as a data augmentation strategy when EUS data is scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices extracted in a manner such that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data, obtained from CT data, imposes only a minor classification accuracy penalty and may help generalization to new unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io
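
    The cycle-consistency constraint at the core of such a network can be sketched as follows; the generator names and the loss weight are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(g_ct2eus, g_eus2ct, ct_batch, eus_batch, weight=10.0):
    """CycleGAN-style cycle loss: translating CT -> sEUS -> CT (and
    EUS -> CT -> EUS) should reproduce the inputs. `g_ct2eus` and `g_eus2ct`
    are assumed generator networks; `weight` is an illustrative value."""
    ct_cycled = g_eus2ct(g_ct2eus(ct_batch))
    eus_cycled = g_ct2eus(g_eus2ct(eus_batch))
    return weight * (l1(ct_cycled, ct_batch) + l1(eus_cycled, eus_batch))
```

    In a full model this term is added to the two adversarial losses, which is what allows training with unpaired EUS images and CT slices.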

    Learning image quality assessment by reinforcing task amenable data selection

    In this paper, we consider a type of image quality assessment as a task-specific measurement, which can be used to select images that are more amenable to a given target task, such as image classification or segmentation. We propose to simultaneously train two neural networks, for image selection and for a target task, using reinforcement learning. A controller network learns an image selection policy by maximising an accumulated reward based on the target task performance on the controller-selected validation set, whilst the target task predictor is optimised using the training set. The trained controller is therefore able to reject those images that lead to poor accuracy in the target task. In this work, we show that the controller-predicted image quality can be significantly different from the task-specific image quality labels that are manually defined by humans. Furthermore, we demonstrate that it is possible to learn effective image quality assessment without using a "clean" validation set, thereby avoiding the requirement for human labelling of images with respect to their amenability for the task. Using 6712 labelled and segmented clinical ultrasound images from 259 patients, experimental results on holdout data show that the proposed image quality assessment achieved a mean classification accuracy of 0.94 ± 0.01 and a mean segmentation Dice of 0.89 ± 0.02, by discarding 5% and 15% of the acquired images, respectively. The significantly improved performance was observed for both tested tasks, compared with the respective 0.90 ± 0.01 and 0.82 ± 0.02 from networks without considering task amenability. This enables image quality feedback during real-time ultrasound acquisition, among many other medical imaging applications.
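
    One way to realise the controller update described above is a REINFORCE-style policy gradient; the sketch below is a minimal illustration under that assumption, not the paper's exact algorithm.

```python
import torch

def reinforce_step(controller, optimiser, images, evaluate_task):
    """One REINFORCE-style update of the image-selection policy (a sketch).
    `controller` is assumed to output a per-image keep probability;
    `evaluate_task` is an assumed callback returning the task predictor's
    performance on the controller-selected validation images."""
    probs = controller(images).squeeze(-1)          # (N,) selection probabilities
    dist = torch.distributions.Bernoulli(probs)
    actions = dist.sample()                         # 1 = keep image, 0 = reject
    reward = evaluate_task(images[actions.bool()])  # e.g. mean accuracy or Dice
    loss = -reward * dist.log_prob(actions).sum()   # ascend the expected reward
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return reward
```

    The task predictor itself is updated on the training set in alternation with these controller steps.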

    Development and evaluation of intraoperative ultrasound segmentation with negative image frames and multiple observer labels

    When developing deep neural networks for segmenting intraoperative ultrasound images, several practical issues are encountered frequently, such as the presence of ultrasound frames that do not contain regions of interest and the high variance in ground-truth labels. In this study, we evaluate the utility of a pre-screening classification network placed before the segmentation network. Experimental results demonstrate that such a classifier, by minimising frame classification errors, was able to directly impact the number of false-positive and false-negative frames. Importantly, the segmentation accuracy on the classifier-selected frames that proceed to segmentation remains comparable to, or better than, that of standalone segmentation networks. Interestingly, the efficacy of the pre-screening classifier was affected by the methods used to sample training labels from multiple observers, a seemingly independent problem. We show experimentally that a previously proposed approach, combining random sampling and consensus labels, may need to be adapted to perform well in our application. Furthermore, this work aims to share practical experience in developing a machine learning application that assists highly variable interventional imaging for prostate cancer patients, to present robust and reproducible open-source implementations, and to report a set of comprehensive results and analysis comparing these practical, yet important, options in a real-world clinical application.
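
    The pre-screening arrangement can be illustrated with a short inference sketch; the classifier output format, mask shape and the 0.5 threshold are assumptions for illustration.

```python
import torch

@torch.no_grad()
def screen_then_segment(classifier, segmenter, frames, threshold=0.5):
    """Two-stage inference sketch: a pre-screening classifier rejects frames
    predicted to contain no region of interest; only accepted frames are
    passed to the segmentation network. Assumes the classifier returns one
    logit per frame and the segmenter returns masks shaped like the frames."""
    keep = torch.sigmoid(classifier(frames)).squeeze(-1) >= threshold
    masks = torch.zeros_like(frames)       # empty masks for rejected frames
    if keep.any():
        masks[keep] = segmenter(frames[keep])
    return masks, keep
```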

    Adaptable image quality assessment using meta-reinforcement learning of task amenability

    The performance of many medical image analysis tasks is strongly associated with image data quality. When developing modern deep learning algorithms, rather than relying on subjective (human-based) image quality assessment (IQA), task amenability potentially provides an objective measure of task-specific image quality. To predict task amenability, an IQA agent is trained using reinforcement learning (RL) with a simultaneously optimised task predictor, such as a classification or segmentation neural network. In this work, we develop transfer learning or adaptation strategies to increase the adaptability of both the IQA agent and the task predictor, so that they are less dependent on high-quality, expert-labelled training data. The proposed transfer learning strategy re-formulates the original RL problem for task amenability in a meta-reinforcement learning (meta-RL) framework. The resulting algorithm facilitates efficient adaptation of the agent to different definitions of image quality, each with its own Markov decision process environment, including different images, labels and an adaptable task predictor. Our work demonstrates that IQA agents pre-trained on non-expert task labels can be adapted to predict task amenability as defined by expert task labels, using only a small set of expert labels. Using 6644 clinical ultrasound images from 249 prostate cancer patients, our results for image classification and segmentation tasks show that the proposed IQA method can be adapted using data with as few as 19.7% and 29.6% expert-reviewed consensus labels for the respective tasks, and still achieve comparable IQA and task performance, which would otherwise require a training dataset with 100% expert labels.
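
    One possible realisation of the meta-RL outer loop is a serial Reptile-style update across quality-definition environments; the sketch below assumes a per-environment RL routine `inner_train` and is not necessarily the paper's exact scheme.

```python
import copy

def reptile_outer_loop(agent, environments, inner_train, meta_lr=0.1, inner_steps=5):
    """Serial Reptile-style meta-update sketch: adapt a copy of the IQA agent
    inside each quality-definition environment (its own MDP), then move the
    shared initialisation towards the adapted weights."""
    meta_weights = copy.deepcopy(agent.state_dict())
    for env in environments:
        agent.load_state_dict(copy.deepcopy(meta_weights))
        inner_train(agent, env, steps=inner_steps)   # assumed environment-specific RL
        for name, w in agent.state_dict().items():
            meta_weights[name] += meta_lr * (w - meta_weights[name])
    agent.load_state_dict(meta_weights)
```

    The meta-learned initialisation is what allows later adaptation to an expert-defined quality environment with only a small set of expert labels.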

    DeepReg: a deep learning toolkit for medical image registration

    Image fusion is a fundamental task in medical image analysis and computer-assisted intervention. Medical image registration, the class of computational algorithms that align different images together (Hill et al., 2001), has in recent years turned its research attention towards deep learning. Indeed, the ability of deep neural networks to learn representations from population data has opened new possibilities for improving registration generalisability, by mitigating the difficulties in designing hand-engineered image features and similarity measures for many real-world clinical applications (Fu et al., 2020; Haskins et al., 2020). In addition, fast inference can substantially accelerate registration execution for time-critical tasks. DeepReg is a Python package, built on TensorFlow (Abadi et al., 2015), that implements multiple registration algorithms and a set of predefined dataset loaders, supporting both labelled and unlabelled data. DeepReg also provides command-line tool options that enable basic and advanced functionalities for model training, prediction and image warping. These implementations, together with their documentation, tutorials and demos, aim to simplify workflows for prototyping and developing novel methodology, utilising the latest developments and accessing quality research advances. DeepReg is unit-tested, and a set of customised contributor guidelines is provided to facilitate community contributions. A submission to the MICCAI Educational Challenge has utilised the DeepReg code and demos to explore the link between classical algorithms and deep-learning-based methods (Montana Brown et al., 2020), while a recently published research work investigated temporal changes in prostate cancer imaging, using a longitudinal registration adapted from the DeepReg code (Yang et al., 2020).
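
    The image-warping functionality central to such toolkits can be illustrated generically; the 2-D sketch below is not DeepReg's actual API, just a resampling of an image with a dense displacement field using standard PyTorch operations.

```python
import torch
import torch.nn.functional as F

def warp_image(image, ddf):
    """Warp a 2-D image with a dense displacement field (generic sketch, not
    DeepReg's API). image: (B, C, H, W); ddf: (B, 2, H, W) pixel displacements
    ordered (x, y)."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Identity sampling grid plus displacement gives the source locations.
    grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0) + ddf
    # Normalise to [-1, 1] as required by grid_sample, then resample.
    grid[:, 0] = 2 * grid[:, 0] / (W - 1) - 1
    grid[:, 1] = 2 * grid[:, 1] / (H - 1) - 1
    return F.grid_sample(image, grid.permute(0, 2, 3, 1), align_corners=True)
```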

    Image quality assessment for closed-loop computer-assisted lung ultrasound

    We describe a novel, two-stage computer assistance system for lung anomaly detection using ultrasound imaging in the intensive care setting, designed to improve operator performance and patient stratification during coronavirus pandemics. The proposed system consists of two deep-learning-based models: a quality assessment module automates predictions of image quality, and a diagnosis assistance module determines the likelihood-of-anomaly in ultrasound images of sufficient quality. Our two-stage strategy uses a novelty detection algorithm to address the lack of control cases available for training a quality assessment classifier. The diagnosis assistance module can then be trained with data that are deemed of sufficient quality, guaranteed by the closed-loop feedback mechanism from the quality assessment module. Integrating the two modules yields accurate, fast, and practical acquisition guidance and diagnostic assistance for patients with suspected respiratory conditions at the point-of-care. Using more than 25,000 ultrasound images from 37 COVID-19-positive patients scanned at two hospitals, plus 12 control cases, this study demonstrates the feasibility of the proposed machine learning approach. We report an accuracy of 86% when classifying between images of sufficient and insufficient quality with the quality assessment module. For data of sufficient quality, the mean classification accuracy in detecting COVID-19-positive cases was 95% on five holdout test data sets, unseen during the training of any networks within the proposed system.
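
    A common realisation of novelty detection for such a quality gate, assumed here purely for illustration, thresholds the reconstruction error of an autoencoder trained only on sufficient-quality frames:

```python
import torch

@torch.no_grad()
def quality_gate(autoencoder, frames, threshold):
    """Novelty-detection sketch for a quality-assessment module: an autoencoder
    trained only on sufficient-quality frames reconstructs them well, so a high
    reconstruction error flags a frame as insufficient quality. The thresholding
    scheme is an illustrative assumption, not the paper's stated method."""
    recon = autoencoder(frames)
    err = ((recon - frames) ** 2).mean(dim=(1, 2, 3))   # per-frame MSE
    return err <= threshold          # True = pass frame to the diagnosis module
```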

    Image quality assessment for machine learning tasks using meta-reinforcement learning

    In this paper, we consider image quality assessment (IQA) as a measure of how amenable images are to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability of both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach using two clinical applications: ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
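
    After meta-training, adaptation to a new dataset or meta-task reduces to a few fine-tuning updates from the meta-learned initialisation; a minimal sketch, in which `rl_step`, `task_step` and the environment interface are assumed helper routines:

```python
def adapt_to_new_task(controller, predictor, new_env, rl_step, task_step,
                      adapt_steps=20):
    """Fine-tune a meta-trained IQA controller and task predictor on a new
    meta-task with a few alternating updates. `task_step` is an assumed
    supervised update of the predictor; `rl_step` an assumed reward-driven
    update of the controller."""
    for _ in range(adapt_steps):
        task_step(predictor, new_env.training_batch())   # supervised update
        rl_step(controller, predictor, new_env)          # reward-driven update
    return controller, predictor
```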